Information-Theoretic Generalization Bounds for Meta-Learning and Applications
Authors
Sharu Theresa Jose, Osvaldo Simeone
Abstract
Meta-learning, or "learning to learn", refers to techniques that infer an inductive bias from data corresponding to multiple related tasks, with the goal of improving the sample efficiency for new, previously unobserved, tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered, which use either separate within-task training and test sets, like model agnostic meta-learning (MAML), or joint within-task training and test sets, like Reptile. Extending the existing work for conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data. For the latter, the derived bound includes an additional MI between the output of the per-task learning procedure and the corresponding data set, which captures within-task uncertainty. Tighter bounds are then developed, under given technical conditions, for the two classes via novel individual task MI (ITMI) bounds. Applications of the derived bounds are finally discussed, including a broad class of noisy iterative algorithms for meta-learning.
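To fix ideas, the central quantity can be written out as follows. This is a minimal sketch based only on the description above; the notation (meta-learner output $u$, task distribution $P_T$, meta-training data $S_{1:N}$ from $N$ tasks) is assumed here and may differ from the paper's.

\[
\Delta\mathcal{L}(u) \;=\; \mathbb{E}_{\tau \sim P_T}\!\left[ L_{\tau}(u) \right] \;-\; L_{S_{1:N}}(u),
\]

that is, the average loss of the inferred inductive bias $u$ on a new task $\tau$ drawn from the task distribution, minus its empirical loss on the meta-training data. The bounds described above control the expectation of this gap through the mutual information $I(U; S_{1:N})$ between the meta-learner's output and its input data.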
Similar Papers
Generalization Bounds for Learning Kernels
This paper presents several novel generalization bounds for the problem of learning kernels based on a combinatorial analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels using L1 regularization admits only a √ log p dependency on the number of kernels, which is tight and considerably more favorable...
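To unpack what the √log p dependency buys, here is a sketch of the scaling only; the constants and the exact complexity term are a simplification, not the paper's statement. With m training samples and p base kernels, a bound of this type has the shape

\[
\text{generalization gap} \;=\; O\!\left( \sqrt{\frac{\log p}{m}} \right),
\]

so the number of base kernels can grow polynomially in the sample size while the guarantee remains meaningful, in contrast to bounds that scale linearly or polynomially with p.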
Information-theoretic analysis of generalization capability of learning algorithms
We derive upper bounds on the generalization error of a learning algorithm in terms of the mutual information between its input and output. The bounds provide an information-theoretic understanding of generalization in learning problems, and give theoretical guidelines for striking the right balance between data fit and generalization by controlling the input-output mutual information. We propo...
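For reference, the headline result of that line of work (Xu and Raginsky, 2017) takes the following form; the notation here is standard rather than quoted from the paper. For a hypothesis $W$ learned from $n$ i.i.d. samples $S$, with a loss that is $\sigma$-sub-Gaussian under the data distribution $\mu$,

\[
\left| \mathbb{E}\!\left[ L_{\mu}(W) - L_{S}(W) \right] \right| \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)},
\]

so the expected generalization error vanishes whenever the input-output mutual information grows sub-linearly in $n$. This is the single-task bound that the paper above extends to the meta-learning setting.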
Generalization Analysis for Game-Theoretic Machine Learning
For Internet applications like sponsored search, caution needs to be taken when using machine learning to optimize their mechanisms (e.g., auctions), since self-interested agents in these applications may change their behaviors (and thus the data distribution) in response to the mechanisms. To tackle this problem, a framework called game-theoretic machine learning (GTML) was recently proposed, whi...
Generalization Bounds for Linear Learning Algorithms
We study generalization properties of linear learning algorithms and develop a data-dependent approach that is used to derive generalization bounds that depend on the margin distribution. Our method makes use of random projection techniques to allow the use of existing VC dimension bounds in the effective, lower dimension of the data. Comparisons with existing generalization bounds show that ou...
Generalization bounds for learning weighted automata
This paper studies the problem of learning weighted automata from a finite sample of strings with real-valued labels. We consider several hypothesis classes of weighted automata defined in terms of three different measures: the norm of an automaton’s weights, the norm of the function computed by an automaton, and the norm of the corresponding Hankel matrix. We present new data-dependent general...
Journal
Journal title: Entropy
Year: 2021
ISSN: 1099-4300
DOI: https://doi.org/10.3390/e23010126